
    A User-Adaptive Automated DJ Web App with Object-Based Audio and Crowd-Sourced Decision Trees

    We describe the concepts behind a web-based minimal-UI DJ system that adapts to the user’s preference via simple interactive decisions and feedback on taste. Starting from a preset decision tree modelled on common DJ practice, the system can gradually learn a more customised and user-specific tree. At the core of the system are structural representations of the musical content, based on semantic audio technologies and inferred from features extracted from the audio directly in the browser. These representations are gradually combined into a representation of the mix, which can then be saved and shared with other users. We show how different types of transitions can be modelled using simple musical constraints. Potential applications of the system include crowd-sourced data collection, both on temporally aligned playlisting and on musical preference.
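
    The abstract describes the decision tree only conceptually. As a purely illustrative sketch of how a preset tree of simple musical constraints might select a transition type, consider the following; the `Segment` fields, thresholds, and transition names are assumptions, not the paper's implementation:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """Structural description of a track section, e.g. inferred from
    semantic audio features extracted in the browser."""
    tempo: float   # beats per minute
    key: int       # pitch class, 0-11
    energy: float  # normalised loudness, 0-1

def choose_transition(current: Segment, candidate: Segment) -> str:
    """Walk a small preset decision tree modelled on common DJ practice.

    Each branch encodes one simple musical constraint; user feedback
    could later re-order or re-weight the branches.
    """
    tempo_ratio = candidate.tempo / current.tempo
    # unison, perfect fourth or perfect fifth counts as harmonically close
    harmonic = (candidate.key - current.key) % 12 in (0, 5, 7)
    if 0.92 <= tempo_ratio <= 1.08:  # close enough to beatmatch
        return "beatmatched blend" if harmonic else "beatmatched EQ swap"
    if candidate.energy < current.energy - 0.3:
        return "breakdown fade"
    return "hard cut on a phrase boundary"

print(choose_transition(Segment(126, 9, 0.8), Segment(124, 2, 0.7)))
# -> beatmatched blend
```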

    Exploring Real-time Visualisations to Support Chord Learning with a Large Music Collection

    A common problem in music education is finding varied and engaging material that is suitable for practising a specific musical concept or technique. At the same time, a number of large music collections are available under a Creative Commons (CC) licence (e.g. Jamendo, ccMixter), but their potential is largely untapped because of the relative obscurity of their content. In this paper, we present *Jam with Jamendo*, a web application that allows novice and expert learners of musical instruments to query songs by chord content from a large music collection, and to practise the chords present in the retrieved songs by playing along. Its goal is twofold: the learners get a larger variety of practice material, while the artists receive increased exposure. We experimented with two visualisation modes: the first is a linear visualisation based on a moving time axis; the second is a circular visualisation inspired by the chromatic circle. We conducted a small-scale thinking-aloud user study with seven participants based on hands-on practice with the web app. Through this pilot study, we obtained a qualitative understanding of the potential and challenges of each visualisation, which will inform the next design iteration of the web app.
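
    As a hedged illustration of the chord-content querying described above, the sketch below filters a toy catalogue for songs whose chord vocabulary fits within a learner's known chords. The data, field names, and function are invented for illustration and do not reflect the app's actual backend or Jamendo's API:

```python
# Toy catalogue: each entry maps a song title to its chord vocabulary.
catalogue = [
    {"title": "Song A", "chords": {"C", "G", "Am", "F"}},
    {"title": "Song B", "chords": {"C", "G", "F"}},
    {"title": "Song C", "chords": {"E", "B7", "A"}},
]

def songs_for_practice(known_chords: set[str]) -> list[str]:
    """Keep only songs playable with the learner's current chords
    (chord vocabulary must be a subset of the known chords)."""
    return [s["title"] for s in catalogue if s["chords"] <= known_chords]

print(songs_for_practice({"C", "G", "Am", "F"}))
# -> ['Song A', 'Song B']
```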

    pywebaudioplayer: Bridging the gap between audio processing code and attractive visualisations based on web technology

    Lately, a number of audio players based on web technology have made it possible for researchers to present their audio-related work in an attractive manner. Tools such as "wavesurfer.js", "waveform-playlist" and "trackswitch.js" provide highly configurable players, allowing a more interactive exploration of scientific results that goes beyond simple linear playback. However, the audio output to be presented is in many cases not generated by the same web technologies. The process of preparing audio data for display therefore requires manual intervention in order to bridge the resulting gap between programming languages. While this is acceptable for one-time events, such as the preparation of final results, it prevents the use of such players during the iterative development cycle. Having access to rich audio players already during development would give researchers more immediate feedback. The current workflow consists of repeatedly importing audio into a digital audio workstation in order to achieve similar capabilities, a repetitive and time-consuming process. In order to address these needs, we present "pywebaudioplayer", a Python package that automates the generation of code snippets for each of the three aforementioned web audio players. It is aimed at use cases where audio development in Python is combined with web visualisation. Notable examples are "Jupyter Notebook" and WSGI-compatible web frameworks such as "Flask" or "Django".
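
    Since no full text is available here, the package's exact interface is not shown. The sketch below only illustrates the general idea of generating a web-player snippet from Python; the function is hypothetical and not pywebaudioplayer's actual API, though `WaveSurfer.create` and `load` are standard wavesurfer.js calls:

```python
# Hypothetical sketch of snippet generation, not pywebaudioplayer's API:
# emit an HTML/JS fragment that embeds a wavesurfer.js player for an
# audio file rendered from Python code.
def wavesurfer_snippet(audio_url: str, container_id: str = "waveform") -> str:
    """Return an HTML fragment embedding a wavesurfer.js player."""
    return f"""
<div id="{container_id}"></div>
<script src="https://unpkg.com/wavesurfer.js@6"></script>
<script>
  const ws = WaveSurfer.create({{ container: '#{container_id}' }});
  ws.load('{audio_url}');
</script>"""

# In a Jupyter Notebook the fragment can be rendered inline:
# from IPython.display import HTML
# HTML(wavesurfer_snippet('results/mix.wav'))
```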